Single-Microphone Speech Enhancement Inspired by Auditory System
Author
Abstract
Title of dissertation: Single-Microphone Speech Enhancement Inspired by Auditory System
Majid Mirbagheri, Doctor of Philosophy, 2014
Dissertation directed by: Professor Shihab Shamma, Department of Electrical and Computer Engineering

Enhancing the quality of speech in noisy environments has been an active area of research, owing to the abundance of applications that deal with the human voice and whose performance depends on that quality. While early approaches addressed this problem in a purely statistical framework, estimating speech from its sum with other independent processes (noise), over the last decade the attention of the scientific community has turned to the functionality of the human auditory system. Considerable effort has gone into bridging the gap between the performance of speech processing algorithms and that of the average human listener by borrowing models proposed for sound processing in the auditory system. In this thesis, we introduce algorithms for speech enhancement inspired by two of these models: the cortical representation of sounds and the hypothesized role of temporal coherence in auditory scene analysis.

After an introduction to the auditory system and the speech enhancement framework, we first show how traditional speech enhancement techniques such as Wiener filtering can benefit, at the feature extraction level, from the discriminative power of the spectro-temporal representation of sounds in the cortex, i.e. the cortical model. We next focus on feature processing, as opposed to feature extraction, in speech enhancement systems by taking advantage of models hypothesized for human attention in sound segregation. We demonstrate a mask-based enhancement method in which the temporal coherence of features is used as a criterion to elicit information about their sources and, more specifically, to form the masks needed to suppress the noise.
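The Wiener filtering mentioned above reduces, in its classical form, to a per-bin gain applied to a noisy time-frequency representation. A minimal sketch of that baseline (our own toy formulation, not the thesis's cortical-feature variant; the noise power is assumed to be estimated from a noise-only segment, and all names are ours):

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=1e-3):
    """Per-bin Wiener gain snr / (1 + snr), where the a-priori SNR is
    approximated from the noisy power and an estimated noise power.
    A small gain floor limits musical-noise artifacts."""
    snr = np.maximum(noisy_power / (noise_power + 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (1.0 + snr), floor)

# Toy example: one spectral frame with four frequency bins.
noisy = np.array([4.0, 1.0, 9.0, 0.5])   # |Y(f)|^2, noisy observation
noise = np.array([1.0, 1.0, 1.0, 1.0])   # estimated |N(f)|^2
gain = wiener_gain(noisy, noise)
enhanced = gain * noisy                   # filtered power spectrum
```

In the cortical-feature approach described in the text, the gain would be computed over the richer spectro-temporal features rather than raw spectral bins; the sketch only illustrates the filtering step itself.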
Lastly, we explore how the two blocks for feature extraction and manipulation can be merged into one in a manner consistent with our knowledge of the auditory system. We do this through regularized non-negative matrix factorization, which optimizes the feature extraction while simultaneously accounting for temporal dynamics to separate noise from speech.
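The non-negative matrix factorization underlying this last approach can be sketched with the standard multiplicative updates. This is a generic illustration, assuming the Euclidean (Lee-Seung) update rules with an optional L1 penalty on the activations as a simple stand-in for the temporal regularizers the thesis discusses; it is not the author's exact formulation:

```python
import numpy as np

def nmf(V, rank, iters=500, sparsity=0.0, seed=0):
    """Factor a non-negative matrix V ~ W @ H via multiplicative updates.
    `sparsity` adds an L1 penalty on H (a placeholder for the temporal
    regularization described in the text)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1       # spectral basis vectors
    H = rng.random((rank, m)) + 0.1       # time-varying activations
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy check: an exactly rank-2 non-negative matrix is recovered closely.
V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In a speech enhancement setting, separate bases would typically be learned for speech and noise, and the reconstruction from the speech-related columns of `W` would yield the enhanced signal.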
Similar works
A Generalized Time–Frequency Subtraction Method for Robust Speech Enhancement Based on Wavelet Filter Banks Modeling of Human Auditory System
We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized tim...
Full text
Speech Detection and Enhancement Using Single Microphone for Distant Speech Applications in Reverberant Environments
It is well known that in reverberant environments, the human auditory system has the ability to pre-process reverberant signals to compensate for reflections and obtain effective cues for improved recognition. In this study, we propose such a preprocessing technique for combined detection and enhancement of speech using a single microphone in reverberant environments for distant speech applicat...
Full text
In-Ear Microphone Hybrid Speech Enhancement
This paper presents a novel speech enhancement approach for performing noise reduction in severely disturbed environments. A small microphone for communication purposes is placed inside the external auditory canal to pick up the speech signal originating from the speech production organ. The speech enhancement is achieved by using three different noise reduction methods: High frequencies are at...
Full text
Single-Microphone Speech Separation: The use of Speech Models
Separation of speech sources is fundamental for robust communication. In daily conversations, the signals reaching our ears generally consist of target speech sources, interference from competing speakers, and ambient noise. Consider, for example, talking with someone at a cocktail party or making a phone call in a train compartment. Fig. 1 shows a typical indoor environment having multiple sound...
Full text
A Dual-Microphone Speech Enhancement Algorithm for Close-Talk System
While human listening is robust in complex auditory scenes, current speech enhancement algorithms do not perform well in noisy environments, even when a close-talk system is used. This paper addresses robustness in a dual-microphone embedded close-talk system by employing a computational auditory scene analysis (CASA) framework. The energy difference between the two microphones is used as the primar...
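The inter-microphone energy difference this snippet describes can drive a simple binary time-frequency mask. A hedged toy sketch (our own formulation with an assumed decision threshold, not the paper's algorithm):

```python
import numpy as np

def energy_difference_mask(close_power, far_power, margin_db=3.0):
    """Label a time-frequency unit as target speech if the close-talk
    microphone carries at least `margin_db` more energy than the far one."""
    ratio_db = 10.0 * np.log10((close_power + 1e-12) / (far_power + 1e-12))
    return (ratio_db >= margin_db).astype(float)

# Three T-F units: target-dominated, balanced, and equal-energy noise.
close = np.array([10.0, 1.0, 5.0])
far   = np.array([ 1.0, 1.0, 5.0])
mask = energy_difference_mask(close, far)   # → [1., 0., 0.]
```

Applying such a mask to the close-talk spectrogram keeps units where the target speaker dominates and zeroes the rest, the basic move in CASA-style mask-based enhancement.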
Full text